anti-discrimination law
The Cost of Arbitrariness for Individuals: Examining the Legal and Technical Challenges of Model Multiplicity
Ganesh, Prakhar, Daldaban, Ihsan Ibrahim, Cofone, Ignacio, Farnadi, Golnoosh
Model multiplicity, the phenomenon where multiple models achieve similar performance despite learning different underlying functions, introduces arbitrariness in model selection. While this arbitrariness may seem inconsequential in expectation, its impact on individuals can be severe. This paper explores various individual concerns stemming from multiplicity, including the effects of arbitrariness beyond final predictions, disparate arbitrariness for individuals belonging to protected groups, and the challenges that arise when the arbitrariness of a single algorithmic system creates a monopoly across various contexts. It provides both an empirical examination of these concerns and a comprehensive analysis from the legal standpoint, addressing how these issues are treated under anti-discrimination law in Canada. We conclude by discussing the technical challenges that the current landscape of model multiplicity poses for meeting legal requirements, as well as the gap between current law and the implications of arbitrariness in model selection, highlighting relevant future research directions for both disciplines.
- North America > Canada > British Columbia (0.04)
- North America > Canada > Ontario (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (2 more...)
- Law > Civil Rights & Constitutional Law (0.89)
- Information Technology > Security & Privacy (0.67)
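The multiplicity phenomenon the abstract describes can be made concrete with a toy sketch: two hypothetical decision rules (the data, rules, and feature names below are invented for illustration) that achieve identical accuracy yet disagree on half of the individuals, so which one is deployed is arbitrary from the model developer's perspective but decisive for each person.

```python
# Toy illustration of model multiplicity: two equally accurate rules
# that disagree on individual predictions. All data is invented.

# Each row: (years_experience, test_score, hired_label)
data = [
    (1, 90, 1), (6, 40, 1), (7, 85, 1), (2, 30, 0),
    (5, 20, 0), (0, 75, 0), (8, 95, 1), (3, 35, 0),
]

def model_a(x):   # rule 1: hire if experience >= 4
    return int(x[0] >= 4)

def model_b(x):   # rule 2: hire if test score >= 60
    return int(x[1] >= 60)

def accuracy(model):
    return sum(model(x) == x[2] for x in data) / len(data)

# Fraction of individuals on which the two models disagree
# ("ambiguity" in the predictive-multiplicity literature).
ambiguity = sum(model_a(x) != model_b(x) for x in data) / len(data)

# accuracy(model_a) == accuracy(model_b) == 0.75, yet ambiguity == 0.5:
# the rules are interchangeable in aggregate but not for individuals.
```

Both rules misclassify exactly two records, so a selection between them based on aggregate accuracy alone is arbitrary, even though it flips the outcome for four of the eight individuals.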
Colorado the First State to Move Ahead With Attempt to Regulate AI's Role in American Life
The first attempts to regulate artificial intelligence programs that play a hidden role in hiring, housing and medical decisions for millions of Americans are facing pressure from all sides and floundering in statehouses nationwide. Only one of seven bills aimed at preventing AI's penchant for discrimination when making consequential decisions -- including who gets hired, money for a home or medical care -- has passed. Colorado Gov. Jared Polis hesitantly signed the bill on Friday. Colorado's bill and those that faltered in Washington, Connecticut and elsewhere faced battles on many fronts, including between civil rights groups and the tech industry, among lawmakers wary of wading into a technology few yet understand, and from governors worried about being the odd state out and spooking AI startups. Polis signed Colorado's bill "with reservations," saying in a statement he was wary of regulations dousing AI innovation.
- North America > United States > Colorado (1.00)
- North America > United States > Connecticut (0.26)
- North America > United States > California (0.05)
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
Multi-dimensional discrimination in Law and Machine Learning -- A comparative overview
Roy, Arjun, Horstmann, Jan, Ntoutsi, Eirini
AI-driven decision-making can lead to discrimination against certain individuals or social groups based on protected characteristics/attributes such as race, gender, or age. The domain of fairness-aware machine learning focuses on methods and algorithms for understanding, mitigating, and accounting for bias in AI/ML models. Still, thus far, the vast majority of the proposed methods assess fairness based on a single protected attribute, e.g., only gender or race. In reality, though, human identities are multi-dimensional, and discrimination can occur based on more than one protected characteristic, leading to the so-called "multi-dimensional discrimination" or "multi-dimensional fairness" problem. While well elaborated in the legal literature, the multi-dimensionality of discrimination is less explored in the machine learning community. Recent approaches in this direction mainly follow the so-called intersectional fairness definition from the legal domain, whereas other notions like additive and sequential discrimination are less studied or not considered thus far. In this work, we overview the different definitions of multi-dimensional discrimination/fairness in the legal domain, as well as how (if at all) they have been transferred and operationalized in the fairness-aware machine learning domain. By juxtaposing these two domains, we draw the connections, identify the limitations, and point out open research directions.
- North America > United States > New York > New York County > New York City (0.04)
- Europe > Netherlands > Limburg > Maastricht (0.04)
- Europe > Germany > Lower Saxony > Hanover (0.04)
- (17 more...)
- Workflow (0.46)
- Overview (0.46)
- Research Report (0.40)
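A minimal sketch of why single-attribute audits can miss the multi-dimensional discrimination described above, using invented decision records: each attribute looks balanced (or only mildly skewed) on its own, while the gap between intersectional subgroups is far larger.

```python
# Hypothetical decisions: (gender, race, positive_outcome).
# All records are invented for illustration.
decisions = [
    ("M", "W", 1), ("M", "W", 0),
    ("M", "B", 1), ("M", "B", 0),
    ("F", "W", 1), ("F", "W", 1),
    ("F", "B", 0), ("F", "B", 0),
]

def selection_rates(key):
    """Positive-outcome rate per group, where `key` maps a record to its group."""
    totals, positives = {}, {}
    for rec in decisions:
        g = key(rec)
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + rec[2]
    return {g: positives[g] / totals[g] for g in totals}

def max_gap(rates):
    return max(rates.values()) - min(rates.values())

gender_gap    = max_gap(selection_rates(lambda r: r[0]))           # 0.0
race_gap      = max_gap(selection_rates(lambda r: r[1]))           # 0.5
intersect_gap = max_gap(selection_rates(lambda r: (r[0], r[1])))   # 1.0
```

Here a gender-only audit finds no disparity at all, and a race-only audit finds a 0.5 gap, yet the intersectional subgroup ("F", "B") is never selected while ("F", "W") always is, which is the pattern an intersectional fairness definition is meant to surface.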
New York City Artificial Intelligence Laws
Newly emerging artificial intelligence (AI) technologies could hold a promising solution for streamlining certain employment practices and hiring processes in a number of different industries. Historically, both federal courts and regulatory enforcement agencies have been wary of AI tools, scrutinizing them heavily under local, state and federal anti-discrimination laws. In what was a welcome piece of news for New York-based employers, the New York City Department of Consumer and Worker Protection recently published a set of proposed rules, still awaiting approval, that could drastically reshape the process of AI-assisted hiring and employment. For city employers who rely heavily on automated employment decision tools (AEDT) for hiring, these proposed rules provide some initial guidance on the laws surrounding artificial intelligence, with hopes of clarifying the ambiguous AI law the city enacted back in 2021. The law, which won't fully go into effect until January 1, 2023, prohibits employers from using any form of AEDT unless a bias audit is completed by an independent auditor and notice requirements are fully met.
AI Screens of Pandemic Job Seekers Could Lead to Bias Claims (1)
Companies are making more use of algorithmic hiring tools to screen a flood of job applicants during the coronavirus pandemic, amid questions about whether they introduce new forms of bias into the early vetting process. The tools are designed to more efficiently filter out candidates who don't meet certain job-related criteria, like prior work experience, and to recruit potential hires via their online profiles. Businesses like HireVue offer biometric scanning tools that give feedback on applicants based on facial expressions, while others like Pymetrics use behavioral tests to home in on ideal candidates. Companies including Colgate-Palmolive Co., McDonald's Corp., Boston Consulting Group Inc., PricewaterhouseCoopers LLP, and Kraft Heinz Co. are using them at a time when, according to the Labor Department, 21 million people in the U.S. were without jobs and seeking employment in May. Job candidates might be unable or unwilling to apply and interview in person because of rules limiting social gatherings, said Monica Snyder, a workplace privacy attorney at Fisher Phillips in Boston.
- North America > United States > New York (0.05)
- North America > United States > Maryland (0.05)
- North America > United States > Illinois (0.05)
- North America > United States > California > Alameda County > Berkeley (0.05)
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Information Technology (0.97)
- Consumer Products & Services (0.90)
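The bias audits mentioned above typically report selection rates and impact ratios per demographic category. A minimal sketch with hypothetical counts, assuming the common impact-ratio formulation (each group's selection rate divided by the rate of the most-selected group), and using 0.8 as the EEOC "four-fifths" rule of thumb rather than a statutory threshold:

```python
# Hypothetical applicant and selection counts for an automated
# employment decision tool; group names are placeholders.
applicants = {"group_x": 200, "group_y": 150}
selected   = {"group_x": 60,  "group_y": 27}

rates = {g: selected[g] / applicants[g] for g in applicants}

# Impact ratio: each group's selection rate relative to the
# most-selected group's rate.
best = max(rates.values())
impact_ratios = {g: rates[g] / best for g in rates}

# Flag groups below the four-fifths (0.8) rule of thumb.
flagged = [g for g, r in impact_ratios.items() if r < 0.8]
# rates: group_x 0.30, group_y 0.18 -> impact ratio 0.6 -> flagged
```

An impact ratio below 0.8 does not itself establish unlawful discrimination; it is a screening heuristic that signals where closer legal and statistical scrutiny is warranted.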
Fairness in algorithmic decision-making
Algorithmic or automated decision systems use data and statistical analyses to classify people for the purpose of assessing their eligibility for a benefit or penalty. Such systems have been traditionally used for credit decisions, and currently are widely used for employment screening, insurance eligibility, and marketing. They are also used in the public sector, including for the delivery of government services, and in criminal justice sentencing and probation decisions. Most of these automated decision systems rely on traditional statistical techniques like regression analysis. Recently, though, these systems have incorporated machine learning to improve their accuracy and fairness. These advanced statistical techniques seek to find patterns in data without requiring the analyst to specify in advance which factors to use. They will often find new, unexpected connections that might not be obvious to the analyst or follow from a common sense or theoretic understanding of the subject matter. As a result, they can help to discover new factors that improve the accuracy of eligibility predictions and the decisions based on them. In many cases, they can also improve the fairness of these decisions, for instance, by expanding the pool of qualified job applicants to improve the diversity of a company's workforce.
- Health & Medicine (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Law > Civil Rights & Constitutional Law (0.96)
- Banking & Finance > Credit (0.88)
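The idea of finding predictive factors without specifying them in advance can be sketched with a toy feature search (data and feature names are invented): score each candidate feature by the best single-threshold rule it supports, and let the data reveal which factors matter.

```python
# Toy automated factor discovery: rank features by how well a simple
# threshold rule on each one predicts eligibility. All data invented.
records = [
    ({"income": 40, "tenure": 1, "zip_density": 9}, 0),
    ({"income": 45, "tenure": 8, "zip_density": 2}, 1),
    ({"income": 80, "tenure": 2, "zip_density": 8}, 0),
    ({"income": 85, "tenure": 9, "zip_density": 1}, 1),
    ({"income": 60, "tenure": 7, "zip_density": 3}, 1),
    ({"income": 55, "tenure": 2, "zip_density": 7}, 0),
]

def best_threshold_accuracy(feature):
    """Best accuracy achievable by a single threshold rule on one feature."""
    values = sorted({r[0][feature] for r in records})
    best = 0.0
    for t in values:
        for sign in (1, -1):   # try "feature >= t" and "feature <= t"
            acc = sum((sign * (r[0][feature] - t) >= 0) == bool(r[1])
                      for r in records) / len(records)
            best = max(best, acc)
    return best

scores = {f: best_threshold_accuracy(f) for f in records[0][0]}
# In this toy data both tenure and zip_density separate the labels
# perfectly, while income (the factor an analyst might pre-specify)
# does not; zip_density is exactly the kind of unexpected -- and
# potentially proxy -- factor such a search can surface.
```

This is the double-edged property the passage describes: automated pattern finding can surface genuinely useful new factors, but it can just as easily latch onto proxies for protected characteristics.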
On the Legal Compatibility of Fairness Definitions
Xiang, Alice, Raji, Inioluwa Deborah
Past literature has been effective in demonstrating ideological gaps in machine learning (ML) fairness definitions when considering their use in complex socio-technical systems. However, we go further to demonstrate that these definitions often misunderstand the legal concepts from which they purport to be inspired, and consequently inappropriately co-opt legal language. In this paper, we demonstrate examples of this misalignment and discuss the differences in ML terminology and their legal counterparts, as well as what both the legal and ML fairness communities can learn from these tensions. We focus this paper on U.S. anti-discrimination law since the ML fairness research community regularly references terms from this body of law.
- North America > United States > California > San Francisco County > San Francisco (0.15)
- North America > United States > Texas (0.05)
- North America > Canada (0.04)
- Africa > South Sudan > Equatoria > Central Equatoria > Juba (0.04)
- Law > Labor & Employment Law (1.00)
- Law > Civil Rights & Constitutional Law (1.00)
- Government > Regional Government > North America Government > United States Government (0.47)